16 research outputs found

    Vowel priority lip matching scheme and similarity evaluation model based on humanoid robot Ren-Xin

    Although the significance of humanoid robots has increased dramatically, such robots have rarely entered everyday human life because the technology is still immature. The lip shape of a humanoid robot is crucial during speech, since it makes the robot look more like a real human. Many studies show that vowels are the essential elements of pronunciation in all of the world's languages. Building on traditional viseme research, we raise the priority of smooth lip transitions between vowels and propose a lip-matching scheme based on vowel priority. We also design a similarity evaluation model based on the Manhattan distance over computer-vision lip features, which quantifies lip-shape similarity on a 0-1 scale and provides an effective evaluation standard. Notably, this model compensates for the shortcomings of existing lip-shape similarity evaluation criteria in this field. We applied the lip-matching scheme to the Ren-Xin humanoid robot and performed robot teaching experiments as well as a similarity comparison experiment on 20 sentences spoken by two males, two females, and the robot. All experiments achieved excellent results.
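    The abstract does not spell out the exact lip features or their normalization, so the following is only a minimal sketch of a Manhattan-distance similarity bounded to [0, 1]; the feature values, the [0, 1] pre-scaling, and the name `lip_similarity` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lip_similarity(feat_a, feat_b):
    """Similarity in [0, 1] from the Manhattan (L1) distance between two
    lip-shape feature vectors, assuming each feature is pre-scaled to [0, 1]."""
    a = np.asarray(feat_a, dtype=float)
    b = np.asarray(feat_b, dtype=float)
    d = np.abs(a - b).sum()      # Manhattan distance between the two lip shapes
    d_max = float(len(a))        # largest possible L1 distance for [0, 1]-scaled features
    return 1.0 - d / d_max       # 1.0 = identical lip shapes, 0.0 = maximally different

# Compare a human speaker's lip features against the robot's for one video frame.
human_lip = [0.42, 0.18, 0.55]   # e.g. normalized mouth width, opening height, roundness
robot_lip = [0.40, 0.22, 0.50]
print(lip_similarity(human_lip, robot_lip))   # close to 1.0 for similar shapes
```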

    Classification of Known and Unknown Environmental Sounds Based on Self-Organized Space Using a Recurrent Neural Network

    Our goal is to develop a system that learns and classifies environmental sounds for robots working in the real world. Two main restrictions apply to learning in the real world: (i) robots must learn from only a small amount of data in a limited time because of hardware constraints, and (ii) the system must adapt to unknown data, since it is virtually impossible to collect samples of all environmental sounds. We used a neuro-dynamical model to build a prediction and classification system. By learning samples, this model self-organizes sound classes into parameters. The sound-classification space constructed from these parameters is structured by the sound-generation dynamics and forms clusters not only for known classes but also for unknown ones. The proposed system classifies a sound by searching this space. In experiments, we evaluated the classification accuracy for both known and unknown sound classes.
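    A minimal sketch of the classification step described above, assuming the neuro-dynamical model has already self-organized one parameter vector per learned sound class; the class names, the distance threshold for declaring a sound unknown, and the simple nearest-neighbor search are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

# Parameter vectors the neuro-dynamical model is assumed to have self-organized
# for the learned (known) sound classes; names and values are illustrative only.
class_params = {
    "door_knock": np.array([0.8, 0.1]),
    "bell":       np.array([0.2, 0.9]),
}
UNKNOWN_THRESHOLD = 0.5   # beyond this distance, treat the sound as an unknown class

def classify(sound_param):
    """Search the sound-classification space for the nearest known class;
    fall back to "unknown" when nothing is close enough."""
    name, dist = min(
        ((n, np.linalg.norm(sound_param - p)) for n, p in class_params.items()),
        key=lambda pair: pair[1],
    )
    return name if dist < UNKNOWN_THRESHOLD else "unknown"

print(classify(np.array([0.75, 0.15])))   # -> "door_knock"
print(classify(np.array([0.50, 0.50])))   # -> "unknown"
```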

    Developmental Human-Robot Imitation Learning of Drawing with a Neuro Dynamical System

    Abstract—This paper deals with the influence of teaching style and of developmental processes in the learning model on the acquired representations (primitives). We investigate these influences by introducing a hierarchical recurrent neural network as the robot's self-model, together with a form of motionese (a caregiver's use of simpler and more exaggerated motions when showing a task to an infant). We modified a Multiple Timescales Recurrent Neural Network (MTRNN) as the robot's self-model; the number of layers in the MTRNN increases as it learns more complex events. We evaluate our approach with the humanoid robot "Actroid" through an imitation experiment in which a human caregiver gives the robot the task of pushing two buttons. The experimental results and analysis confirm that learning with phased teaching and structuring allows clear motion primitives to be acquired as activities in the fast context layer of the MTRNN and enables the robot to handle unknown motions.
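    For orientation, here is a minimal sketch of the multiple-timescales idea behind an MTRNN: units with a small time constant (fast context) change quickly and can carry motion primitives, while units with a large time constant (slow context) change slowly and can sequence those primitives. The layer sizes, time constants, and single fused weight matrix below are assumptions for illustration, not the trained model from the paper.

```python
import numpy as np

def mtrnn_step(u, x, W, tau):
    """Leaky-integrator update u_{t+1} = (1 - 1/tau)*u_t + (1/tau)*(W @ x_t),
    followed by a tanh activation; tau is a per-unit time constant."""
    u_next = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ x)
    return u_next, np.tanh(u_next)

n_io, n_fast, n_slow = 4, 10, 4
n = n_io + n_fast + n_slow
tau = np.concatenate([
    np.full(n_io, 2.0),     # input-output units: react quickly to sensory data
    np.full(n_fast, 5.0),   # fast context: holds motion primitives
    np.full(n_slow, 50.0),  # slow context: sequences the primitives
])
W = 0.1 * np.random.randn(n, n)   # fully connected weights (placeholder values)
u = np.zeros(n)
x = np.random.randn(n)            # initial activations; in training the IO part is data-driven
for _ in range(100):
    u, x = mtrnn_step(u, x, W, tau)
```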

    Self-Organization of Invariants for Motion Generation Based on Prediction Reliability

    Doctoral dissertation, Doctor of Informatics, Kyoto University. Degree no. 甲第14761号 (情博第336号); call number 新制||情||63 (University Library); UT51-2009-D473. Graduate School of Informatics, Department of Intelligence Science and Technology, Kyoto University. Examination committee: Prof. Hiroshi Okuno (chief examiner), Prof. Toshio Inui, Assoc. Prof. Tetsuya Ogata. Conferred under Article 4, Paragraph 1 of the Degree Regulations.

    Inter-modality mapping in robot with recurrent neural network

    A system for mapping between different sensory modalities was developed to enable a robot to generate motions expressing auditory signals and, conversely, sounds expressing object movement. A recurrent neural network model with parametric bias, which has good generalization ability, is used as the learning model. Because the correspondences between auditory and visual signals are too numerous to memorize, the ability to generalize is indispensable. The system was implemented on the "Keepon" robot, which was shown horizontal reciprocating or rotating motions of a manipulated box accompanied by friction sounds, and falling or overturning motions accompanied by collision sounds. Keepon behaved appropriately not only for learned events but also for unknown ones, and generated various sounds in accordance with the observed motions.
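    A minimal sketch of the generation side of a recurrent network with parametric bias (RNNPB), the learning model named above: a small PB vector, obtained by recognizing one modality, is held fixed while the network unfolds the corresponding sequence in the other modality. All sizes, weight names, and values below are placeholders, not the trained Keepon model.

```python
import numpy as np

n_io, n_pb, n_hid = 6, 2, 20
W_in  = 0.1 * np.random.randn(n_hid, n_io + n_pb)   # input + parametric-bias weights
W_rec = 0.1 * np.random.randn(n_hid, n_hid)         # recurrent weights
W_out = 0.1 * np.random.randn(n_io, n_hid)          # output (next-frame prediction) weights

def generate(pb, x0, steps=30):
    """Roll the network forward in closed loop with the PB vector held fixed:
    each predicted sensory/motor frame is fed back as the next input."""
    h = np.zeros(n_hid)
    x, frames = x0, []
    for _ in range(steps):
        h = np.tanh(W_in @ np.concatenate([x, pb]) + W_rec @ h)
        x = np.tanh(W_out @ h)
        frames.append(x)
    return np.stack(frames)

# A PB vector recognized from, say, an observed rotating-box motion would then
# drive generation of the matching sound/motion sequence.
trajectory = generate(pb=np.array([0.3, -0.7]), x0=np.zeros(n_io))
print(trajectory.shape)   # (30, 6)
```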

    Modeling Tool-Body Assimilation using Second-order Recurrent Neural Network

    Abstract—Tool-body assimilation is one of the intelligent abilities of humans: through trial and experience, humans become capable of using tools as if they were part of their own bodies. This paper presents a method that uses a robot's active-sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool recognition module. A Self-Organizing Map (SOM) serves as the feature extraction module, extracting object features from raw images. A Multiple Timescales Recurrent Neural Network (MTRNN) serves as the dynamics learning module. Parametric Bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate its behavior depending on the tool. The generalization capability of neural networks gives the model the ability to deal with unknown tools. Experiments were performed with HRP-2 using no tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values shows that the model learned that the robot's dynamic properties change when it holds a tool, and the experimental results show that the tool-body assimilation model can be applied to unknown objects to generate goal-oriented motions.
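    One plausible reading of "PB nodes attached to the weights as a second-order network" is that the PB values modulate the recurrent weights, so the same network exhibits tool-dependent dynamics; the sketch below illustrates only that idea. The single recurrent layer, dimensions, and multiplicative form are assumptions; the paper's model uses a full MTRNN with a SOM front end for image features.

```python
import numpy as np

n_units, n_pb = 12, 3
W_base = 0.1 * np.random.randn(n_units, n_units)        # tool-independent base dynamics
W_mod  = 0.05 * np.random.randn(n_pb, n_units, n_units) # one modulation matrix per PB node

def effective_weights(pb):
    """Second-order connection: the recurrent weights become a function of the
    PB vector, so the same network shows tool-dependent dynamics."""
    return W_base + np.tensordot(pb, W_mod, axes=1)

def step(x, pb):
    return np.tanh(effective_weights(pb) @ x)

x = np.random.randn(n_units)
pb_for_one_tool = np.array([0.9, -0.2, 0.1])   # PB values recognized for one tool (illustrative)
for _ in range(50):
    x = step(x, pb_for_one_tool)
```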

    Motion Generation Based on Reliable Predictability using Self-organized Object Features

    Abstract—Predictability is an important factor in determining robot motions. This paper presents a model that generates robot motions based on reliable predictability, evaluated with a dynamics learning model that self-organizes object features. The model is composed of a dynamics learning module, namely a Recurrent Neural Network with Parametric Bias (RNNPB), and a hierarchical neural network as a feature extraction module. The model takes raw object images and robot motions as input. Through bidirectional training of the two modules, object features that describe the object motion are self-organized at the output of the hierarchical neural network, which is linked to the input of the RNNPB. After training, the model searches for the robot motion whose resulting object motion is most reliably predictable. Experiments were performed with the robot's pushing motion on a variety of objects to generate sliding, falling-over, bouncing, and rolling motions. For objects with a single possible motion, the robot tended to generate motions that induce that object motion; for objects with two possible motions, the robot generated motions that induce the two object motions with roughly equal frequency.
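    A minimal sketch of motion selection by reliable predictability, under the assumption that the trained dynamics model exposes some scalar prediction-error or uncertainty estimate: among candidate pushing motions, choose the one whose predicted object motion is most reliable. The `predictor` interface, the `push_speed` parameter, and the toy example are hypothetical stand-ins, not the RNNPB-based search used in the paper.

```python
import numpy as np

def select_motion(candidates, object_features, predictor):
    """Among candidate motions, return the one whose predicted object motion
    the learned model is most certain about (lowest error/uncertainty estimate)."""
    scores = []
    for motion in candidates:
        _predicted_trajectory, uncertainty = predictor(motion, object_features)
        scores.append(uncertainty)
    return candidates[int(np.argmin(scores))]

# Toy stand-in for the trained model: pretend pushes near speed 0.3 are the most
# reliably predictable for this (unspecified) object.
toy_predictor = lambda motion, obj: (None, abs(motion["push_speed"] - 0.3))
candidates = [{"push_speed": s} for s in (0.1, 0.3, 0.6)]
print(select_motion(candidates, object_features=None, predictor=toy_predictor))
# -> {'push_speed': 0.3}
```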